Precision and Disclosure in Text and Voice Interviews on Smartphones, United States, 2012 (ICPSR 37837)

Version Date: Oct 8, 2020

Principal Investigator(s):
Frederick G. Conrad, University of Michigan. Institute for Social Research. Survey Research Center; Michael F. Schober, The New School for Social Research (New York, N.Y.: 2005- ). Department of Psychology

https://doi.org/10.3886/ICPSR37837.v1

Version V1


As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., short message service (SMS)), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. This dataset contains 1,282 cases: 634 that completed an interview and 648 that were invited to participate but did not start or complete an interview on their iPhone. Participants were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. Ten interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data (fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information) than voice interviews, both with human and automated interviewers. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey. Demographic variables include participants' gender, race, education level, and household income.

Conrad, Frederick G., and Schober, Michael F. Precision and Disclosure in Text and Voice Interviews on Smartphones, United States, 2012. Inter-university Consortium for Political and Social Research [distributor], 2020-10-08. https://doi.org/10.3886/ICPSR37837.v1

Funding: National Science Foundation. Directorate for Social, Behavioral and Economic Sciences (SES-1025645, SES-1026225)

Country: United States

Distributor: Inter-university Consortium for Political and Social Research

Time Period: 2012-03-28 -- 2012-05-03
Date of Collection: 2012-03-28 -- 2012-05-03
  1. This study was originally released in openICPSR.
  2. This study is related to ICPSR 37836 and ICPSR 37846.

As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. The study's key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information.
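
The quality measures described above lend themselves to simple operationalizations. The sketch below is illustrative only and is not part of the released materials: the function names, the multiple-of-five rounding heuristic, and the distinct-values differentiation index are assumptions, and the study's actual coding rules may differ.

    # Illustrative sketch of two of the data quality measures described above.
    # Assumptions: answers that are exact multiples of 5 count as "rounded",
    # and differentiation is the share of distinct scale points a respondent
    # uses across a battery of questions with the same response scale.
    def is_rounded(value, base=5):
        """Heuristic flag: the numerical answer is an exact multiple of `base`."""
        return value % base == 0

    def differentiation(battery_answers):
        """Share of distinct values used across a battery (higher = more varied)."""
        return len(set(battery_answers)) / len(battery_answers)

    # Example: five questions answered on the same 1-7 scale.
    print(differentiation([4, 4, 4, 5, 4]))   # 0.4 (little differentiation)
    print(is_rounded(30), is_rounded(37))     # True False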

This dataset contains 1,282 cases: 634 that completed an interview and 648 that were invited to participate but did not start or complete an interview on their iPhone. Participants were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. Ten interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched.

The questionnaire contained 32 questions, administered after the respondent answered "yes" to a "safe-to-talk" question. Twenty-five questions came from ongoing social surveys administered in the United States (the Behavioral Risk Factor Surveillance System [BRFSS], the National Survey on Drug Use and Health [NSDUH], the National Health Interview Survey [NHIS], and the General Social Survey [GSS]) and from published methodological studies (Conrad, Brown, and Cashman 1998; Tourangeau, Couper, and Conrad 2007). Five questions were developed for this study. Two questions were asked a second time with a definition presented.

A convenience sample of iPhone users was recruited from Craigslist, Facebook, Google Ads, and Amazon Mechanical Turk. Prospective participants were asked to complete a screening questionnaire to determine whether they were eligible; to be eligible, one needed to be 21 or older and own an iPhone with a US area code. Participants recruited were not intended to represent the US population, iPhone users, or smartphone users; the sample was designed to test experimental manipulations through random assignment to conditions on a consistent platform. Eligible participants who provided a telephone number in the screener were sent a text message with a link to a web page, which captured the user-agent string to determine whether the device was an iPhone. Eligible phone numbers were then assigned an interview mode. Once the interview had been completed, respondents were sent a link via text message to a post-interview debriefing questionnaire concerning their experience. At the conclusion of the post-interview debriefing, respondents were sent a text message with a $20 iTunes gift code as a token of appreciation for their time.
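
The user-agent screening step described above can be approximated with a very small web handler. The sketch below is a hedged illustration only: the study's actual screener, its endpoint, and its eligibility logic are not documented here, and the substring check is an assumed way of detecting an iPhone.

    # Minimal sketch (assumed logic) of a screening page that inspects the
    # User-Agent header to decide whether the visiting device is an iPhone.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ScreenerHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            user_agent = self.headers.get("User-Agent", "")
            is_iphone = "iPhone" in user_agent          # crude device check
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"eligible" if is_iphone else b"ineligible")

    if __name__ == "__main__":
        HTTPServer(("", 8000), ScreenerHandler).serve_forever()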

Cross-sectional

iPhone users 21 years of age and older

Individual

The response rate below was calculated using American Association for Public Opinion Research Response Rate 2 (AAPOR RR2).

46.4% (654/1,409) - Mode Choice
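
For reference, AAPOR RR2 counts both complete and partial interviews in the numerator and divides by all eligible and unknown-eligibility cases. The sketch below reproduces the reported rate from the two published totals only; the breakdown of the 1,409-case denominator into refusals, non-contacts, and unknown-eligibility cases is not given here, so the function parameters are generic placeholders.

    # AAPOR RR2 = (I + P) / ((I + P) + (R + NC + O) + (UH + UO))
    def aapor_rr2(completes, partials, refusals, non_contacts, other, unknown):
        numerator = completes + partials
        return numerator / (numerator + refusals + non_contacts + other + unknown)

    # Reproducing the reported rate from the published totals:
    print(round(654 / 1409 * 100, 1))   # 46.4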


Original Release Date: 2020-10-08

2020-10-08: ICPSR data undergo a confidentiality review and are altered when necessary to limit the risk of disclosure. ICPSR also routinely creates ready-to-go data files along with setups in the major statistical software formats as well as standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection:

  • Created variable labels and/or value labels.
  • Created online analysis version with question text.
  • Checked for undocumented or out-of-range codes.

